Twitter: @pakremp


Last update: Friday, November 4, 10:02pm ET.

This is a Stan implementation of Drew Linzer’s dynamic Bayesian election forecasting model, with some tweaks to incorporate national poll data, pollster house effects, correlated priors on state-by-state election results, and correlated polling errors.

For more details on the original model:

Linzer, D. 2013. “Dynamic Bayesian Forecasting of Presidential Elections in the States.” Journal of the American Statistical Association. 108(501): 124-134. (link)

The Stan and R files are available here.


1305 polls available since April 01, 2016 (including 992 state polls and 313 national polls).


Electoral College

Note: the model does not account for the specific electoral vote allocation rules in place in Maine and Nebraska.

National Vote

This graph shows Hillary Clinton’s share of the Clinton and Trump national vote, derived from the weighted average of latent state-by-state vote intentions (using the same state weights as in the 2012 presidential election, adjusted for state adult population growth between 2011 and 2015). In the model (described below), national vote intentions are defined as:

\[\pi^{clinton}[t, US] = \sum_{s \in S} \omega_s \cdot \textrm{logit}^{-1} (\mu_a[t] + \mu_b[t, s])\]

The thick line represents the median of posterior distribution of national vote intentions; the light blue area shows the 90% credible interval. The thin blue lines represent 100 draws from the posterior distribution.

From today to November 8, Hillary Clinton’s share of the national vote is predicted to revert partially towards the fundamentals-based prior (shown as the dotted black line).

Each national poll (raw numbers, unadjusted for pollster house effects) is represented as a dot (darker dots indicate narrower margins of error). On average, Hillary Clinton’s national poll numbers seem to be running slightly below the level that would be consistent with the latent state-by-state vote intentions.
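As a sketch, this weighted average can be computed directly. The \(\mu\) values and weights below are made up for illustration; in the model, the actual weights \(\omega_s\) come from 2012 turnout adjusted for population growth:

```python
import numpy as np

def inv_logit(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative (made-up) values: national component mu_a on day t,
# and state-specific components mu_b for three hypothetical states
mu_a = 0.05
mu_b = np.array([0.3, -0.1, -0.4])
omega = np.array([0.5, 0.3, 0.2])  # state shares of the total vote, sum to 1

# National vote intention: weighted average of state-level inverse logits
pi_clinton_us = float(np.sum(omega * inv_logit(mu_a + mu_b)))
```

Since the weights sum to 1 and each state share lies in (0, 1), the national share is guaranteed to lie in (0, 1) as well.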


State Vote

The following graphs show vote intention by state (with 100 draws from the posterior distribution represented as thin blue lines):

\[\pi^{clinton}[t,s] = \textrm{logit}^{-1} (\mu_a[t] + \mu_b[t, s])\]

States are sorted by predicted Clinton score on election day.

Current Vote Intentions and Forecast By State

State-by-State Probabilities

Map

Pollster House Effects

Most pro-Clinton polls:

Poll Origin Median P05 P95
Saint Leo University 2.7 1.4 4.0
Public Religion Research Institute 1.8 0.7 2.9
AP 1.7 0.3 3.1
Michigan State University 1.7 -0.2 3.8
RABA Research 1.6 0.4 2.8
McClatchy 1.5 0.0 2.9
GQR 1.4 0.4 2.4
ICITIZEN 1.4 0.1 2.7
WNEU 1.4 -0.2 3.3
Baldwin Wallace University 1.3 -0.6 3.2

Most pro-Trump polls:

Poll Origin Median P05 P95
Rasmussen -2.3 -2.9 -1.7
PPIC -1.8 -3.3 -0.3
Remington Research Group -1.8 -2.7 -1.0
UPI -1.8 -2.4 -1.2
Clout Research -1.7 -3.6 0.1
Hampton University -1.6 -3.1 -0.2
IBD -1.6 -2.6 -0.5
InsideSources -1.6 -3.6 0.3
Dixie Strategies -1.5 -3.0 -0.1
Emerson College Polling Society -1.5 -3.0 0.0

Discrepancy between national polls and weighted average of state polls


Data

The R script runmodel.R downloads state and national polls from the HuffPost Pollster website as .csv files before processing the data.

The model ignores third-party candidates and undecided voters. I restrict each poll’s sample to respondents declaring vote intentions for Clinton or Trump, so that \(N = N^{clinton} + N^{trump}\). (This is problematic for Utah, where independent candidate Evan McMullin is polling strongly.)

When multiple polls are available by the same pollster, at the same date, and for the same state, I pick polls of likely voters rather than registered voters, and polls for which \(N^{clinton} + N^{trump}\) is the smallest (assuming that these are poll questions in which respondents are given the option to choose a third-party candidate, rather than questions in which respondents are only asked to choose between the two leading candidates).

Polls by the same pollster and of the same state with partially overlapping dates are dropped so that only non-overlapping polls are retained, starting from the most recent poll.

To account for the fact that polls can be conducted over several days, I set the poll date to the midpoint between the day the poll started and the day it ended.

Model

The model is in the file state and national polls.stan. It has a backward component, which aggregates poll history to derive unobserved latent vote intentions; and a forward component, which predicts how these unobserved latent vote intentions will evolve until election day. The backward and forward components are linked through priors about vote intention evolution: in each state, latent vote intentions follow a reverse random walk in which vote intentions “start” on election day \(T\) and evolve in random steps (correlated across states) as we go back in time. The starting point of the reverse random walk is the final state of vote intentions, which is assigned a reasonable prior, based on the Time-for-change, fundamentals-based electoral prediction model. The model reconciles the history of state and national polls with prior beliefs about final election results and about how vote intentions evolve.

Backward Component: Poll Aggregation

For each poll \(i\), the number of respondents declaring they intended to vote for Hillary Clinton \(N^{clinton}_i\) is drawn from a binomial distribution:

\[ N^{clinton}_i \sim \textrm{Binomial}(N_i, \pi^{clinton}_i) \]

where \(N_i\) is poll sample size, and \(\pi^{clinton}_i\) is share of the Clinton vote for this poll.
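A minimal simulation of this sampling model (the sample size and true share below are hypothetical, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(2016)
N_i = 800    # hypothetical two-party sample size for poll i
pi_i = 0.52  # hypothetical Clinton share of the two-party vote

# Number of respondents declaring a Clinton vote intention
n_clinton = int(rng.binomial(N_i, pi_i))
```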

The model treats national and state polls differently.

State polls

If poll \(i\) is a state poll, I use a day/state/pollster multilevel model:

\[\textrm{logit} (\pi^{clinton}_i) = \mu_a[t_i] + \mu_b[t_i, s_i] + \mu_c[p_i] + u_i + e[s_i]\]

This model decomposes the log-odds of reported vote intentions for Hillary Clinton \(\pi^{clinton}_i\) into a national component shared across all states (\(\mu_a\)), a state-specific component (\(\mu_b\)), a pollster house effect (\(\mu_c\)), a poll-specific measurement noise term (\(u\)), and a polling error term (\(e\)) shared across all polls of the same state (the higher \(e\), the more polls overestimate Hillary Clinton’s true score).

On the day of the last available poll \(t_{last}\), the national component \(\mu_a[t_{last}]\) is set to zero, so that the predicted share of the Clinton vote in state \(s\) (net of pollster house effects and measurement noise) after that date and until election day \(T\) is:

\[\pi^{clinton}_{ts} = \textrm{logit}^{-1} (\mu_b[t, s])\]

To reduce the number of parameters, the model only takes weekly values for \(\mu_b\), so that:

\[\mu_b[t, s] = \mu_b^{weekly}[w_t, s]\]

where \(w_t\) is the week of day \(t\).
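A sketch of the state-poll decomposition on the logit scale, including the day-to-week mapping. All parameter values and the `t // 7` binning here are illustrative placeholders, not the fitted model:

```python
import numpy as np

def logit_pi_state(mu_a, mu_b_weekly, mu_c, u_i, e, t, s, p):
    """Log-odds of the Clinton share in a state poll:
    national component + state component (weekly) + house effect
    + poll-specific noise + shared state polling error."""
    w_t = t // 7  # week index of day t (one simple binning, for illustration)
    return mu_a[t] + mu_b_weekly[w_t, s] + mu_c[p] + u_i + e[s]

# Tiny made-up example: 14 days, 2 weeks, 3 states, 2 pollsters
mu_a = np.zeros(14)
mu_b_weekly = np.array([[0.20, -0.10, 0.00],
                        [0.25, -0.05, 0.05]])
mu_c = np.array([0.03, -0.02])
e = np.array([0.01, 0.00, -0.01])

# A poll of state 0 by pollster 1 on day 8 (second week)
lp = logit_pi_state(mu_a, mu_b_weekly, mu_c, u_i=0.0, e=e, t=8, s=0, p=1)
```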

National polls

If poll \(i\) is a national poll, I use the same multilevel approach (with random intercepts for pollster house effects \(\mu_c\)) but I add a little tweak: the share of the Clinton vote in a national poll should also reflect the weighted average of state-by-state scores at the time of the poll. I model the share of vote intentions in national polls in the following way:

\[\textrm{logit} (\pi^{clinton}_i) = \textrm{logit}\left( \sum_{s \in \{1 \dots S\}} \omega_s \cdot \textrm{logit}^{-1} (\mu_a[t_i] + \mu_b^{weekly}[w_{t_i}, s] + e[s]) \right) + \alpha + \mu_c[p_i] + u_i\]

where \(\omega_s\) represents the share of state \(s\) in the total votes of the set of polled states \(1 \dots S\) (based on 2012 turnout numbers adjusted for adult population growth in each state between 2011 and 2015). The \(\alpha\) parameter corrects for possible discrepancies between national polls and the weighted average of state polls. Possible sources of discrepancies may include:

  • the fact that when polls are not available for all states, polled states can be on average more blue or more red than the country as a whole (not a problem since the first 50-state Washington Post/SurveyMonkey poll in early September);
  • changes in state weights since 2012;
  • any possible (time-invariant) bias in national polls relative to state polls.

The idea is that while national poll levels may be off and generally not very indicative of the state of the race, national poll changes may contain valuable information to update \(\mu_a\) and (to a lesser extent) \(\mu_b\) parameters.
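The national-poll equation can be sketched the same way; the weights and components below are made up, and only the functional form follows the model:

```python
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def inv_logit(x):
    return 1.0 / (1.0 + np.exp(-x))

def logit_pi_national(mu_a_t, mu_b_week, e, omega, alpha, mu_c_p, u_i):
    # Weighted average of state-level shares, mapped back to the logit
    # scale, plus the national-vs-state discrepancy alpha, the pollster
    # house effect, and poll-specific noise
    avg = np.sum(omega * inv_logit(mu_a_t + mu_b_week + e))
    return logit(avg) + alpha + mu_c_p + u_i

omega = np.array([0.5, 0.3, 0.2])        # made-up state weights
mu_b_week = np.array([0.3, -0.1, -0.4])  # made-up state components
lp = float(logit_pi_national(0.05, mu_b_week, np.zeros(3), omega,
                             alpha=-0.01, mu_c_p=0.0, u_i=0.0))
```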

How vote intentions evolve

In order to smooth out vote intentions by state and obtain latent vote intentions on dates when no polls were conducted, I use two reverse random walk priors for \(\mu_a\) and \(\mu_b^{weekly}\) from \(t_{last}\) back to April 1:

\[\mu_b^{weekly}[w_t-1, s] \sim \textrm{Normal}(\mu_b^{weekly}[w_t, s], \sigma_b \cdot \sqrt{7})\]

\[\mu_a[t-1] \sim \textrm{Normal}(\mu_a[t], \sigma_a)\]

Both \(\sigma_a\) and \(\sigma_b\) are given uniform priors between 0 and 0.05.

Their posterior marginal distributions are shown below. The median day-to-day total standard deviation of vote intentions is about 0.4%. The model seems to find that most of the changes in latent vote intentions are attributable to national swings rather than state-specific swings (national swings account on average for about 91% of the total day-to-day variance).
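A forward simulation of these two reverse random walk priors, using values near the reported posterior medians for \(\sigma_a\) and \(\sigma_b\) (illustrative only; this simulates the prior, not the model fit):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_a, sigma_b = 0.02, 0.01   # roughly the posterior medians, logit scale
T_days, n_weeks, n_states = 70, 10, 3

# National component: daily reverse walk anchored at mu_a[t_last] = 0
mu_a = np.zeros(T_days)
for t in range(T_days - 1, 0, -1):
    mu_a[t - 1] = mu_a[t] + rng.normal(0.0, sigma_a)

# State components: weekly reverse walk with step sd sigma_b * sqrt(7)
mu_b_weekly = np.zeros((n_weeks, n_states))
for w in range(n_weeks - 1, 0, -1):
    mu_b_weekly[w - 1] = mu_b_weekly[w] + rng.normal(
        0.0, sigma_b * np.sqrt(7), n_states)
```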


Forward Component: Vote Intention Forecast

Final outcome

I use a multivariate normal distribution for the prior of the final outcome. Its mean is based on the Time-for-Change model – which predicts that Hillary Clinton should receive 48.6% of the national vote (based on Q2 GDP figures, the current President’s approval rating and number of terms). The prior expects state final scores to remain on average centered around \(48.6\% + \delta_s\), where \(\delta_s\) is the excess Obama performance relative to the national vote in 2012 in state \(s\).

\[\mu_b[T, 1 \dots S] \sim \textrm{Multivariate Normal}(\textrm{logit} (0.486 + \delta_{1 \dots S}), \mathbf{\Sigma})\]

For the covariance matrix \(\mathbf{\Sigma}\), I set the variance to 0.05 and the covariance to 0.025 for all states and pairs of states – which corresponds to a correlation coefficient of 0.5 across states.

  • This prior is relatively imprecise as to the expected final scores in any given state; for example, in a state like Virginia, which Obama won with 52% of the vote in 2012 (identical to his national score), Hillary Clinton is expected to get 48.6% of the vote, with a 95% certainty that her score will not fall below 38% or exceed 59%.

  • State scores are also expected to be correlated with each other. For example, according to the prior (before looking at polling data), there is only a 3.4% chance that Hillary Clinton will perform worse in Virginia than in Texas. If the priors were independent, this unlikely event could happen with a 10% probability.
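These two figures can be checked analytically from the stated covariance matrix: on the logit scale, the difference between two jointly normal state means has variance \(2\sigma^2 - 2\,\textrm{cov}\). Note that the Texas prior mean used below (about 38.6%) is reconstructed here from Obama’s 2012 two-party score, for illustration:

```python
import math

def Phi(x):  # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def logit(p):
    return math.log(p / (1.0 - p))

var, cov = 0.05, 0.025
mu_va = logit(0.486)  # Virginia prior mean (delta_VA is about 0)
mu_tx = logit(0.386)  # Texas prior mean: assumed ~38.6% (reconstructed)

# P(Clinton does worse in VA than in TX) = P(logit diff < 0)
sd_corr = math.sqrt(2 * var - 2 * cov)          # correlated priors
p_corr = Phi(-(mu_va - mu_tx) / sd_corr)        # about 3.4%
p_indep = Phi(-(mu_va - mu_tx) / math.sqrt(2 * var))  # about 10%
```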


The covariance matrix implies that the correlation between the 2012 state scores and 2016 state priors is expected to be about 0.94 (as opposed to 0.89 if covariances were set to zero). The simulated distribution of correlations between state priors and 2012 scores is in line with observed correlations of state scores with previous election results since 1988 [http://election.princeton.edu/2016/06/02/the-realignment-myth/].

To put it differently, the model does not have a very precise prior about final scores, but it does assume that most of this uncertainty is attributable to national-level swings in vote intentions.

How vote intentions evolve

From election day to the date of the latest available poll \(t_{last}\), vote intentions by state “start” at \(\mu_b[T,s]\) and follow a random walk with correlated steps across states:

\[\mu_b^{weekly}[w_t-1, 1 \dots S] \sim \textrm{Multivariate Normal}(\mu_b^{weekly}[w_t, 1 \dots S], \mathbf{\Sigma_b^{walk}})\]

I set \(\mathbf{\Sigma_b^{walk}}\) so that all variances equal \(0.015^2 \times 7\) and all covariances equal 0.00118 (\(\rho =\) 0.75). This implies a 0.4% standard deviation in daily vote intentions changes in a state where Hillary Clinton’s score is close to 50%. To put it differently, the prior is 95% confident that Hillary Clinton’s score in any given state where she is currently polling around 50% should not move up or down by more than 1.3% over the remaining 3 days until the election.
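These magnitudes follow from a delta-method approximation: near a 50% share, a change on the logit scale translates to about a quarter of that change on the probability scale:

```python
import math

weekly_var = 0.015 ** 2 * 7   # variance of one weekly step, logit scale
rho = 0.00118 / weekly_var    # implied cross-state correlation, ~0.75

sd_daily_logit = 0.015        # weekly variance spread evenly over 7 days
sd_daily_prob = sd_daily_logit * 0.25                      # ~0.4 points
move_95_3d = 1.96 * sd_daily_logit * math.sqrt(3) * 0.25   # ~1.3 points
```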

Poll house effects

Each pollster \(p\) can be biased towards Clinton or Trump:

\[\mu_c[p] \sim \textrm{Normal}(0, \sigma_c)\]

\[\sigma_c \sim \textrm{Uniform}(0, 0.1)\]

Discrepancy between national polls and the average of state polls

I give the \(\alpha\) parameter a prior centered around the observed distance between polled state voters and the national vote in 2012 (this was useful until early September, when many solid red states had still not been polled and the average polled state voter was more pro-Clinton than the average US voter):

\[\bar{\delta_S} = \sum_{s \in \{1 \dots S\}} \omega_s \cdot \pi^{obama'12}_s - \pi^{obama'12}\]

\[\alpha \sim \textrm{Normal}(\textrm{logit} (\bar{\delta_S}), 0.2)\]

Measurement noise

The measurement noise term \(u_i\) is normally distributed around zero, with standard deviation \(\sigma_u^{national}\) for national polls and \(\sigma_u^{state}\) for state polls. I give both standard deviations a uniform prior between 0 and 0.10.

\[\sigma_u^{national} \sim \textrm{Uniform}(0, 0.1)\] \[\sigma_u^{state} \sim \textrm{Uniform}(0, 0.1)\]

Polling error

To account for the possibility that polls might be off on average, even after adjusting for pollster house effects, the model includes a polling error term \(e[s]\) shared by all polls of the same state. For example, the presence of an unexpectedly large share of Trump voters (undetected by the polls) in a given state would translate into large positive \(e\) values for that state. This polling error will remain unknown until election day; however, it can be included as an unidentified random parameter in the likelihood of the model, which increases the uncertainty in the posterior distribution of \(\mu_a\) and \(\mu_b\).

Because I expect polling errors to be correlated across states, I use a multivariate normal distribution:

\[e \sim \textrm{Multivariate Normal}(0, \mathbf{\Sigma_e})\]

To construct \(\mathbf{\Sigma_e}\), I set the variance to \(0.04^2\) and the covariance to 0.00175; this corresponds to a standard deviation of about 1 percentage point for a state in which Clinton’s score is close to 50% (or a 95% certainty that polls are not off by more than 2 percentage points either way); and a 0.7 correlation of polling errors across states.
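The 1-point and 2-point figures follow from the same delta-method approximation (near a 50% share, the probability-scale standard deviation is roughly the logit-scale standard deviation divided by 4):

```python
sd_e_logit = 0.04              # polling error sd on the logit scale
sd_e_prob = sd_e_logit * 0.25  # ~0.01, i.e. about 1 percentage point
interval_95 = 1.96 * sd_e_prob # ~0.02: polls off by ~2 points either way
```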


Recently added polls

Entry Date Source State % Clinton / (Clinton + Trump) % Trump / (Clinton + Trump) N (Clinton + Trump)
2016-11-04 ABC 52.2 47.8 1277
2016-11-04 FOX 51.1 48.9 974
2016-11-04 Gravis Marketing 50.5 49.5 4878
2016-11-04 IBD 50.0 50.0 790
2016-11-04 Ipsos 54.3 45.7 1637
2016-11-04 Lucid 53.0 47.0 725
2016-11-04 Rasmussen 50.0 50.0 1320
2016-11-04 UPI 50.5 49.5 1353
2016-11-04 SurveyMonkey AK 43.2 56.8 271
2016-11-04 SurveyMonkey AL 40.9 59.1 635
2016-11-04 SurveyMonkey AR 38.2 61.8 619
2016-11-04 Data Orbital AZ 45.3 54.7 473
2016-11-04 SurveyMonkey AZ 50.6 49.4 1521
2016-11-04 SurveyMonkey CA 65.9 34.1 2149
2016-11-04 USC CA 64.3 35.7 1161
2016-11-04 Keating CO 53.1 46.9 490
2016-11-04 PPP CO 52.7 47.3 641
2016-11-04 SurveyMonkey CO 52.4 47.6 1619
2016-11-04 SurveyMonkey CT 57.8 42.2 832
2016-11-04 SurveyMonkey DE 58.1 41.9 348
2016-11-04 Gravis Marketing FL 51.6 48.4 1895
2016-11-04 SurveyMonkey FL 51.1 48.9 3088
2016-11-04 Opinion Savvy GA 47.9 52.1 506
2016-11-04 SurveyMonkey GA 48.9 51.1 2585
2016-11-04 SurveyMonkey HI 64.2 35.8 352
2016-11-04 RABA Research IA 48.2 51.8 915
2016-11-04 SurveyMonkey IA 44.0 56.0 1234
2016-11-04 SurveyMonkey ID 38.0 62.0 393
2016-11-04 SurveyMonkey IL 58.4 41.6 997
2016-11-04 Gravis Marketing IN 44.3 55.7 351
2016-11-04 SurveyMonkey IN 40.2 59.8 803
2016-11-04 Fort Hays State University KS 45.3 54.7 767
2016-11-04 Fort Hays State University KS 37.0 63.0 288
2016-11-04 SurveyMonkey KS 42.4 57.6 988
2016-11-04 SurveyMonkey KY 38.9 61.1 759
2016-11-04 SurveyMonkey LA 41.6 58.4 575
2016-11-04 SurveyMonkey MA 65.9 34.1 853
2016-11-04 WNEU MA 68.3 31.7 342
2016-11-04 SurveyMonkey MD 69.7 30.3 757
2016-11-04 SurveyMonkey ME 55.2 44.8 513
2016-11-04 PPP MI 52.9 47.1 833
2016-11-04 SurveyMonkey MI 50.6 49.4 2055
2016-11-04 SurveyMonkey MN 56.0 44.0 886
2016-11-04 PPP MO 44.1 55.9 1007
2016-11-04 SurveyMonkey MO 44.3 55.7 774
2016-11-04 SurveyMonkey MS 44.9 55.1 599
2016-11-04 SurveyMonkey MT 39.1 60.9 351
2016-11-04 PPP NC 51.0 49.0 1122
2016-11-04 SurveyMonkey NC 53.3 46.7 2063
2016-11-04 SurveyMonkey ND 34.1 65.9 235
2016-11-04 SurveyMonkey NE 40.7 59.3 608
2016-11-04 Gravis Marketing NH 48.8 51.2 841
2016-11-04 PPP NH 52.7 47.3 711
2016-11-04 SurveyMonkey NH 56.0 44.0 564
2016-11-04 UMass Lowell NH 50.0 50.0 612
2016-11-04 Stockton NJ 56.0 44.0 617
2016-11-04 SurveyMonkey NJ 59.8 40.2 868
2016-11-04 SurveyMonkey NM 53.2 46.8 611
2016-11-04 ZiaPoll NM 51.7 48.3 981
2016-11-04 PPP NV 51.6 48.4 640
2016-11-04 SurveyMonkey NV 49.4 50.6 884
2016-11-04 SurveyMonkey NY 64.0 36.0 1735
2016-11-04 SurveyMonkey OH 47.1 52.9 1743
2016-11-04 SurveyMonkey OK 36.4 63.6 796
2016-11-04 SurveyMonkey OR 58.6 41.4 1000
2016-11-04 PPP PA 52.2 47.8 966
2016-11-04 SurveyMonkey PA 51.7 48.3 2184
2016-11-04 SurveyMonkey RI 56.3 43.7 341
2016-11-04 SurveyMonkey SC 47.8 52.2 1425
2016-11-04 SurveyMonkey SD 37.3 62.7 326
2016-11-04 SurveyMonkey TN 45.6 54.4 957
2016-11-04 SurveyMonkey TX 47.2 52.8 2067
2016-11-04 SurveyMonkey UT 47.6 52.4 836
2016-11-04 Y2 Analytics UT 42.1 57.9 285
2016-11-04 PPP VA 52.7 47.3 1127
2016-11-04 Roanoke College VA 54.2 45.8 543
2016-11-04 SurveyMonkey VA 54.5 45.5 1824
2016-11-04 SurveyMonkey VT 70.4 29.6 364
2016-11-04 SurveyMonkey WA 60.9 39.1 821
2016-11-04 Loras College WI 53.7 46.3 410
2016-11-04 PPP WI 53.9 46.1 793
2016-11-04 SurveyMonkey WI 51.1 48.9 1380
2016-11-04 SurveyMonkey WV 32.1 67.9 324
2016-11-03 CBS 51.6 48.4 1213
2016-11-03 U of Arkansas AR 37.8 62.2 480
2016-11-03 NBC AZ 47.1 52.9 611
2016-11-03 Saguaro AZ 50.6 49.4 1984
2016-11-03 Field CA 61.6 38.4 1288
2016-11-03 Magellan CO 53.7 46.3 410
2016-11-03 U Colorado Boulder CO 56.4 43.6 783
2016-11-03 University of Denver CO 50.0 50.0 429
2016-11-03 Dixie Strategies FL 47.7 52.3 614
2016-11-03 Opinion Savvy FL 52.1 47.9 567
2016-11-03 NBC GA 49.4 50.6 629
2016-11-03 ARG NH 47.3 52.7 546
2016-11-03 MassINC NH 49.4 50.6 395
2016-11-03 Suffolk NH 50.0 50.0 420
2016-11-03 Dixie Strategies TX 45.8 54.2 647
2016-11-03 Dixie Strategies TX 42.9 57.1 892
2016-11-03 NBC TX 44.9 55.1 604
2016-11-03 Monmouth University UT 45.6 54.4 273
2016-11-03 Rasmussen UT 42.5 57.5 548

Convergence checks

With 4 chains and 2000 iterations (the first 1000 iterations of each chain are discarded), the model runs in less than 15 minutes on my 4-core Intel i7 MacBook Pro.

##  [1] "Inference for Stan model: state and national polls."                          
##  [2] "4 chains, each with iter=2000; warmup=1000; thin=1; "                         
##  [3] "post-warmup draws per chain=1000, total post-warmup draws=4000."              
##  [4] ""                                                                             
##  [5] "                   mean se_mean   sd  2.5%   25%   50%   75% 97.5% n_eff Rhat"
##  [6] "alpha             -0.01       0 0.01 -0.03 -0.02 -0.01 -0.01  0.00  2366 1.00"
##  [7] "sigma_c            0.05       0 0.01  0.04  0.05  0.05  0.06  0.07  1075 1.00"
##  [8] "sigma_u_state      0.05       0 0.00  0.04  0.05  0.05  0.06  0.06   991 1.00"
##  [9] "sigma_u_national   0.02       0 0.01  0.00  0.01  0.02  0.02  0.03   717 1.00"
## [10] "sigma_walk_a_past  0.02       0 0.00  0.01  0.02  0.02  0.02  0.02  1424 1.00"
## [11] "sigma_walk_b_past  0.01       0 0.00  0.00  0.00  0.01  0.01  0.01   945 1.01"
## [12] "mu_b[33,2]        -0.24       0 0.07 -0.38 -0.28 -0.24 -0.19 -0.10  3654 1.00"
## [13] "mu_b[33,3]        -0.41       0 0.07 -0.54 -0.46 -0.41 -0.37 -0.28  4000 1.00"
## [14] "mu_b[33,4]        -0.43       0 0.07 -0.56 -0.47 -0.43 -0.38 -0.30  3563 1.00"
## [15] "mu_b[33,5]        -0.07       0 0.06 -0.19 -0.11 -0.07 -0.03  0.05  2875 1.00"
## [16] "mu_b[33,6]         0.60       0 0.06  0.47  0.55  0.60  0.64  0.71  3547 1.00"
## [17] "mu_b[33,7]         0.09       0 0.06 -0.03  0.05  0.09  0.13  0.21  3253 1.00"
## [18] "mu_b[33,8]         0.28       0 0.07  0.15  0.24  0.28  0.33  0.41  3083 1.00"
## [19] "mu_b[33,9]         0.36       0 0.07  0.21  0.31  0.36  0.41  0.50  4000 1.00"
## [20] "mu_b[33,10]        0.03       0 0.06 -0.08 -0.01  0.03  0.07  0.14  3514 1.00"
## [21] "mu_b[33,11]       -0.09       0 0.06 -0.21 -0.13 -0.09 -0.05  0.03  3618 1.00"
## [22] "mu_b[33,12]        0.62       0 0.08  0.47  0.57  0.62  0.68  0.78  4000 1.00"
## [23] "mu_b[33,13]       -0.09       0 0.06 -0.21 -0.13 -0.09 -0.05  0.03  3309 1.00"
## [24] "mu_b[33,14]       -0.56       0 0.07 -0.70 -0.61 -0.56 -0.51 -0.42  4000 1.00"
## [25] "mu_b[33,15]        0.37       0 0.06  0.24  0.33  0.37  0.41  0.49  3709 1.00"
## [26] "mu_b[33,16]       -0.32       0 0.06 -0.44 -0.36 -0.31 -0.27 -0.19  4000 1.00"
## [27] "mu_b[33,17]       -0.33       0 0.06 -0.45 -0.37 -0.33 -0.29 -0.21  3261 1.00"
## [28] "mu_b[33,18]       -0.51       0 0.07 -0.64 -0.55 -0.51 -0.46 -0.38  3754 1.00"
## [29] "mu_b[33,19]       -0.36       0 0.06 -0.49 -0.40 -0.36 -0.32 -0.24  3665 1.00"
## [30] "mu_b[33,20]        0.59       0 0.06  0.46  0.55  0.59  0.63  0.72  3713 1.00"
## [31] "mu_b[33,21]        0.66       0 0.07  0.53  0.62  0.66  0.70  0.79  3294 1.00"
## [32] "mu_b[33,22]        0.17       0 0.07  0.05  0.13  0.17  0.22  0.30  3534 1.00"
## [33] "mu_b[33,23]        0.10       0 0.06 -0.02  0.06  0.10  0.14  0.22  3535 1.00"
## [34] "mu_b[33,24]        0.16       0 0.07  0.03  0.12  0.16  0.20  0.29  3656 1.00"
## [35] "mu_b[33,25]       -0.23       0 0.06 -0.35 -0.27 -0.23 -0.19 -0.11  3872 1.00"
## [36] "mu_b[33,26]       -0.24       0 0.07 -0.38 -0.29 -0.24 -0.20 -0.11  4000 1.00"
## [37] "mu_b[33,27]       -0.39       0 0.07 -0.53 -0.44 -0.39 -0.34 -0.24  4000 1.00"
## [38] "mu_b[33,28]        0.03       0 0.06 -0.09 -0.01  0.03  0.07  0.14  2839 1.00"
## [39] "mu_b[33,29]       -0.51       0 0.08 -0.66 -0.56 -0.50 -0.45 -0.36  4000 1.00"
## [40] "mu_b[33,30]       -0.41       0 0.07 -0.54 -0.45 -0.41 -0.36 -0.27  4000 1.00"
## [41] "mu_b[33,31]        0.08       0 0.06 -0.03  0.04  0.08  0.12  0.20  3137 1.00"
## [42] "mu_b[33,32]        0.34       0 0.07  0.21  0.30  0.34  0.38  0.47  3780 1.00"
## [43] "mu_b[33,33]        0.17       0 0.06  0.05  0.13  0.17  0.21  0.29  3803 1.00"
## [44] "mu_b[33,34]        0.01       0 0.06 -0.11 -0.03  0.01  0.05  0.12  3271 1.00"
## [45] "mu_b[33,35]        0.51       0 0.07  0.38  0.47  0.51  0.55  0.64  3332 1.00"
## [46] "mu_b[33,36]       -0.05       0 0.06 -0.17 -0.09 -0.05  0.00  0.07  2947 1.00"
## [47] "mu_b[33,37]       -0.57       0 0.07 -0.70 -0.61 -0.57 -0.52 -0.43  4000 1.00"
## [48] "mu_b[33,38]        0.24       0 0.06  0.11  0.19  0.24  0.28  0.36  3470 1.00"
## [49] "mu_b[33,39]        0.10       0 0.06 -0.02  0.06  0.10  0.14  0.21  2899 1.00"
## [50] "mu_b[33,40]        0.31       0 0.08  0.16  0.26  0.32  0.37  0.47  4000 1.00"
## [51] "mu_b[33,41]       -0.13       0 0.06 -0.26 -0.18 -0.13 -0.09  0.00  3729 1.00"
## [52] "mu_b[33,42]       -0.40       0 0.08 -0.56 -0.45 -0.40 -0.35 -0.25  4000 1.00"
## [53] "mu_b[33,43]       -0.35       0 0.07 -0.48 -0.39 -0.35 -0.31 -0.22  3596 1.00"
## [54] "mu_b[33,44]       -0.21       0 0.06 -0.33 -0.25 -0.21 -0.17 -0.09  2833 1.00"
## [55] "mu_b[33,45]       -0.34       0 0.06 -0.46 -0.38 -0.34 -0.29 -0.21  3809 1.00"
## [56] "mu_b[33,46]        0.15       0 0.06  0.03  0.11  0.15  0.19  0.26  3535 1.00"
## [57] "mu_b[33,47]        0.76       0 0.08  0.62  0.71  0.77  0.82  0.91  4000 1.00"
## [58] "mu_b[33,48]        0.31       0 0.07  0.18  0.27  0.31  0.36  0.44  3450 1.00"
## [59] "mu_b[33,49]        0.10       0 0.06 -0.01  0.06  0.10  0.14  0.21  3767 1.00"
## [60] "mu_b[33,50]       -0.59       0 0.07 -0.73 -0.64 -0.59 -0.54 -0.45  4000 1.00"
## [61] "mu_b[33,51]       -0.92       0 0.08 -1.09 -0.98 -0.93 -0.87 -0.76  4000 1.00"
## [62] ""                                                                             
## [63] "Samples were drawn using NUTS(diag_e) at Sat Nov 05 02:59:48 2016."           
## [64] "For each parameter, n_eff is a crude measure of effective sample size,"       
## [65] "and Rhat is the potential scale reduction factor on split chains (at "        
## [66] "convergence, Rhat=1)."